What if we have more than one variable that is evolving through time and space?
populations of two species that are changing through time, and interact with each other
pollutant concentrations in a diffusion-advection model where the two pollutants also interact with each other
growth of above-ground and below-ground carbon and nitrogen in a forest
household consumption and savings through time (when consumption depends on savings and vice versa)
These require several differential equations that have to be solved simultaneously
Systems of equations
can be ordinary, if differentiating with respect to a single variable (time or space)
partial, if differentiating with respect to multiple variables (x, y, z, time)
We can estimate trajectories of systems of equations in much the same way that we used numerical integration for our ODEs
Here again, methods (especially for complicated systems of partial differential equations) can be complicated
Call your engineering/math colleagues when you run into issues!
Two-variable dynamic models with feedbacks between variables can create cyclical (and more complex) dynamics
Two ways to look at results
time series of each state variable
how state variables interact with each other
Predator-Prey models
A simple approach that assumes prey grow exponentially, with a fixed intrinsic growth rate
Analogs
As with diffusion, the basic form/ideas in this model can be applied elsewhere
economics
infectious disease spread
combustion
\(\frac{\partial prey}{\partial t} = r_{prey} * prey - \alpha * prey * pred\)
\(\frac{\partial pred}{\partial t} = eff * \alpha * pred * prey - mort * pred\)
Ordinary Differential Equation with two dependent variables
Still use the ODE solver (ode) in R
Still code the derivative as a function
Use lists or vectors to bring in initial conditions for all dependent variables (similar to how we bring in multiple parameters to the derivative definition function)
using with can help make it easier to code the use of parameters within the derivative definition function (see example below)
use lists to output derivatives for all dependent variables
## function (t, pop, pars)
## {
## with(as.list(c(pars, pop)), {
## dprey = rprey * prey - alpha * prey * pred
## dpred = eff * alpha * prey * pred - pmort * pred
## return(list(c(dprey, dpred)))
## })
## }
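The chunk that produced the output table below isn't shown; here is a minimal sketch of how this derivative function might be run with deSolve's ode(). The function name (lotvmod) and the parameter values are illustrative assumptions, not necessarily the values used to generate the output below.

```r
# sketch: running the two-variable predator-prey ODE with deSolve
# (lotvmod and the parameter values are assumptions for illustration)
library(deSolve)

lotvmod = function(t, pop, pars) {
  with(as.list(c(pars, pop)), {
    dprey = rprey * prey - alpha * prey * pred
    dpred = eff * alpha * prey * pred - pmort * pred
    return(list(c(dprey, dpred)))
  })
}

# initial conditions for all dependent variables go in one named vector
currpop = c(prey = 10, pred = 1)
pars = c(rprey = 0.95, alpha = 0.01, eff = 0.6, pmort = 0.4)
days = seq(from = 1, to = 100)
res = ode(func = lotvmod, y = currpop, times = days, parms = pars)
head(res)
```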
Two-variable dynamic models with feedbacks between variables can create cyclical (and more complex) dynamics
Two ways to look at results
time series of each state variable (pred and prey)
how state variables interact with each other
## time prey pred
## [1,] 1 10.000000 1.000000
## [2,] 2 11.320956 1.564997
## [3,] 3 10.244569 2.482924
## [4,] 4 6.914805 3.420960
## [5,] 5 3.777091 3.836373
## [6,] 6 1.987468 3.710744
# rearrange for easy plotting
resl = as.data.frame(res) %>% pivot_longer(-time, names_to="animal", values_to="pop")
p1=ggplot(resl, aes(time, pop, col=animal))+geom_line()
p1
# To make this easier to understand - maybe
p2b=ggplot(as.data.frame(res), aes(pred, prey, col=time))+geom_point()+labs(y="Prey",x="Predators")
p2b
# Try other parameters - try to bring the relative size of predators (versus prey) higher
\(\frac{\partial prey}{\partial t} = r_{prey} * prey - \alpha * prey * pred\)
\(\frac{\partial pred}{\partial t} = eff * \alpha * pred * prey - mort * pred\)
Predator and Prey with Carrying Capacity?
How would you code that?
\(\frac{\partial prey}{\partial t} = r_{prey} * (1-\frac{prey}{K})*prey - \alpha * prey * pred\)
\(\frac{\partial pred}{\partial t} = eff * \alpha * pred * prey - mort * pred\)
## function (t, pop, pars)
## {
## with(as.list(c(pars, pop)), {
## dprey = rprey * (1 - prey/K) * prey - alpha * prey * pred
## dpred = eff * alpha * prey * pred - pmort * pred
## return(list(c(dprey, dpred)))
## })
## }
# initial conditions
currpop=c(prey=1, pred=1)
# set parameter list
pars = c(rprey=0.1, alpha=0.6, eff=0.8,pmort=0.4, K=20)
# times when you want to evaluate
days = seq(from=1,to=500)
# run our differential equation solver
res = ode(func=lotvmodK, y=currpop, times=days, parms=pars)
# rearrange for plotting
resl = as.data.frame(res) %>% pivot_longer(-time, names_to="species", values_to="pop")
# graph both populations over time
p1=ggplot(resl, aes(time, pop, col=species))+geom_line()
p1
# also look at relationships between predator and prey populations and use color for time
# I will remove the legend here to make it easier to see
p2 = ggplot(as.data.frame(res), aes(pred, prey, col=(round(time/10))))+geom_point()+theme(legend.position = "none")
p2
p2 = ggplot(as.data.frame(res), aes(pred, prey, col=as.factor(round(time/10))))+geom_point()+theme(legend.position = "none")
p2
Species 1 (or Company 1)
\(\frac{\partial s_1}{\partial t} = r_{1} * s_1 * (1-(\frac{s_1+\alpha_{12} * s_2}{K_1}))\)
Species 2 (or Company 2)
\(\frac{\partial s_2}{\partial t} = r_{2} * s_2 * (1-(\frac{s_2+\alpha_{21} * s_1}{K_2}))\)
How might you explain what this is doing?
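As a sketch, the competition model above could be coded for ode() the same way as the predator-prey function; the function name and parameter names (r1, r2, a12, a21, K1, K2) are assumptions chosen to match the equations:

```r
# sketch: two-species (or two-company) competition model for deSolve's ode()
competition = function(t, pop, pars) {
  with(as.list(c(pars, pop)), {
    # each species grows logistically, with the other species
    # counting against its carrying capacity (scaled by alpha)
    ds1 = r1 * s1 * (1 - (s1 + a12 * s2) / K1)
    ds2 = r2 * s2 * (1 - (s2 + a21 * s1) / K2)
    return(list(c(ds1, ds2)))
  })
}
```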
Consider pred-prey, BUT what will the output be? If we want to quantify sensitivity, it is useful to look at a single value or set of values (metrics)
For example
NOTE - I've also given you an example using Latin Hypercube Sampling (optional review)
source("../R/lotvmodK.R")
# let's start with Sobol
library(sensitivity)
# want to learn about sensitivity to growth rate (r) and carrying capacity
# set the number of parameter samples
np=200
K = rnorm(mean=150, sd=20, n=np)
rprey = runif(min=0.01, max=0.3, n=np)
alpha= runif(min=0.1, max=0.4, n=np)
eff= rnorm(mean=0.3, sd=0.01, n=np)
pmort= runif(min=0.01, max=0.45, n=np)
X1 = cbind.data.frame(rprey=rprey, K=K, alpha=alpha, eff=eff, pmort=pmort)
# repeat to get our second set of samples
np=200
K = rnorm(mean=150, sd=20, n=np)
rprey = runif(min=0.01, max=0.3, n=np)
alpha= runif(min=0.1, max=0.4, n=np)
eff= rnorm(mean=0.3, sd=0.01, n=np)
pmort= runif(min=0.01, max=0.45, n=np)
X2 = cbind.data.frame(rprey=rprey, K=K, alpha=alpha, eff=eff, pmort=pmort)
# create our Sobol object and get sets of parameters for running the model
sens_PP = sobolSalt(model = NULL,X1, X2, nboot = 300)
# name parameter sets...
colnames(sens_PP$X) = c("rprey","K","alpha","eff","pmort")
# our metrics
# let's say we want the maximum and minimum of both predator and prey
compute_metrics = function(result) {
maxprey = max(result$prey)
maxpred = max(result$pred)
minprey = min(result$prey)
minpred = min(result$pred)
return(list(maxprey=maxprey, minprey=minprey, maxpred=maxpred, minpred=minpred))}
# build a wrapper function
p_wrapper = function(rprey,alpha, eff, pmort, K, currpop, days, func) {
parms = list(rprey=rprey, alpha=alpha, eff=eff, pmort=pmort, K=K)
result = ode(y=currpop, times=days, func=func, parms=parms)
colnames(result)=c("time","prey","pred")
# get metrics
metrics=compute_metrics(as.data.frame(result))
return(metrics)
}
# run our model for all parameters and extract the results
currpop=c(prey=1, pred=1)
days = seq(from=1,to=500)
allresults = as.data.frame(sens_PP$X) %>% pmap(p_wrapper, currpop=currpop, days=days, func=lotvmodK)
# take results back to unlisted form
allres = allresults %>% map_dfr(`[`,c("maxprey","minprey","maxpred","minpred"))
# range of response across parameter uncertainty
allresl = allres %>% gather(key="metric",value="pop")
ggplot(allresl, aes(metric, pop))+geom_boxplot()
# dealing with different scales
ggplot(allresl, aes(metric, pop, col=metric))+geom_boxplot()+facet_wrap(~metric, scales="free")
# plot cumulative densities
ggplot(allresl, aes(pop, col=metric))+stat_ecdf(geom="line")+facet_wrap(~metric, scales="free")
# create Sobol indices for Max Prey
sens_PP_maxprey = sens_PP %>% sensitivity::tell(y=allres$maxprey)
rownames(sens_PP_maxprey$S)=c("rprey","K","alpha","eff","pmort")
sens_PP_maxprey$S
## original bias std. error min. c.i. max. c.i.
## rprey 0.03183015 0.001526934 0.06091984 -0.0871428 0.1471826
## K 0.02011888 0.001056463 0.06268305 -0.1051848 0.1344842
## alpha 0.40928403 -0.004943837 0.09649863 0.2462706 0.6131552
## eff 0.01657897 0.000982677 0.06247087 -0.1107414 0.1357176
## pmort 0.54727349 -0.001595660 0.04771356 0.4532608 0.6382406
# total-order sensitivity indices
rownames(sens_PP_maxprey$T)=c("rprey","K","alpha","eff","pmort")
sens_PP_maxprey$T
## original bias std. error min. c.i. max. c.i.
## rprey 0.012349345 3.243570e-04 0.0028022463 0.0053030919 0.016364154
## K 0.001154020 -2.182323e-05 0.0006471949 -0.0003453104 0.002116712
## alpha 0.467861815 -1.025672e-03 0.0579236062 0.3583067443 0.582321117
## eff 0.003502894 6.578112e-05 0.0006738263 0.0017952034 0.004561289
## pmort 0.667120368 -1.954088e-03 0.0962965284 0.4794595302 0.855046984
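One way to visualize the indices is a simple bar chart; a sketch, assuming the sens_PP_maxprey object created above:

```r
# sketch: bar plot of first-order Sobol indices for max prey
indices = as.data.frame(sens_PP_maxprey$S)
indices$parm = rownames(indices)
ggplot(indices, aes(parm, original)) +
  geom_col() +
  labs(x = "Parameter", y = "First-order Sobol index (max prey)")
```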
Here’s an example using Latin Hypercube Sampling
With Partial Rank Correlation Coefficient
# Latin hypercube sampling uses randomLHS from the lhs package;
# PRCC uses epi.prcc from the epiR package
library(lhs)
library(epiR)
# create parameter sets...
factors = c("rprey","K","alpha","eff","pmort")
nsets=500
# create a latin hyper cube
sens_pp = randomLHS(nsets, length(factors))
colnames(sens_pp)=factors
# refine using sampling distributions for each parameter
sens_pp[,"rprey"]= qunif(sens_pp[,"rprey"], min=0.01, max=0.3)
sens_pp[,"K"]= qunif(sens_pp[,"K"], min=10, max=200)
sens_pp[,"alpha"]= qunif(sens_pp[,"alpha"], min=0.1, max=0.4)
sens_pp[,"eff"]= qnorm(sens_pp[,"eff"], mean=0.3, sd=0.01)
sens_pp[,"pmort"]= qunif(sens_pp[,"pmort"], min=0.05, max=0.45)
# let's create a metric and wrapper function as we did for our population models
# first our metrics
# let's say we want the maximum and minimum of both predator and prey
compute_metrics = function(result) {
maxprey = max(result$prey)
maxpred = max(result$pred)
minprey = min(result$prey)
minpred = min(result$pred)
return(list(maxprey=maxprey, minprey=minprey, maxpred=maxpred, minpred=minpred))}
# build a wrapper function
p_wrapper = function(rprey,alpha, eff, pmort, K, currpop, days, func) {
parms = list(rprey=rprey, alpha=alpha, eff=eff, pmort=pmort, K=K)
result = ode(y=currpop, times=days, func=func, parms=parms)
colnames(result)=c("time","prey","pred")
# get metrics
metrics=compute_metrics(as.data.frame(result))
return(metrics)
}
# run our model for all parameters and extract the results
currpop=c(prey=1, pred=1)
days = seq(from=1,to=500)
allresults = as.data.frame(sens_pp) %>% pmap(p_wrapper, currpop=currpop, days=days, func=lotvmodK)
# take results back to unlisted form
allres = allresults %>% map_dfr(`[`,c("maxprey","minprey","maxpred","minpred"))
# range of response across parameter uncertainty
allresl = allres %>% gather(key="metric",value="pop")
ggplot(allresl, aes(metric, pop))+geom_boxplot()
# dealing with different scales
ggplot(allresl, aes(metric, pop, col=metric))+geom_boxplot()+facet_wrap(~metric, scales="free")
# plot cumulative densities
ggplot(allresl, aes(pop, col=metric))+stat_ecdf(geom="line")+facet_wrap(~metric, scales="free")
# compute PRCCs using epi.prcc
# let's do maxpred first
epi.prcc(cbind.data.frame(sens_pp, allres$maxpred))
## var est lower upper test.statistic df p.value
## 1 rprey 0.61052928 0.5408005 0.6802580 17.202812 498 2.134851e-52
## 2 K 0.46890664 0.3911437 0.5466696 11.847276 498 1.066825e-28
## 3 alpha -0.84212897 -0.8896080 -0.7946500 -34.848336 498 1.173191e-135
## 4 eff 0.04578135 -0.0421684 0.1337311 1.022725 498 3.069344e-01
## 5 pmort 0.83426357 0.7857202 0.8828070 33.765881 498 7.387155e-131
## var est lower upper test.statistic df p.value
## 1 rprey 0.83481938 0.78635012 0.88328864 33.8400489 498 3.448977e-131
## 2 K -0.10159453 -0.18918105 -0.01400801 -2.2789664 498 2.309146e-02
## 3 alpha -0.78715593 -0.84145622 -0.73285563 -28.4815381 498 1.356102e-106
## 4 eff 0.04202849 -0.04593577 0.12999276 0.9387337 498 3.483226e-01
## 5 pmort 0.64755379 0.58046401 0.71464358 18.9637535 498 9.116234e-61
Lorenz equations (from fluid dynamics): P, R, B are parameters (fixed values); x, y, z are variables that change with time and describe how convection in the atmosphere works - a cell that is warmed from below and cooled from above
Developed by meteorologist Edward Lorenz during early climate model development in the 1960s
The Lorenz equations are an example of a dynamic system that can exhibit stable and chaotic states depending on parameters and initial conditions
Let's look at a Lorenz system with its interesting dynamics
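The sourced ../R/lorenz.R isn't shown here; a minimal sketch consistent with the parameter names used below (a, b, c) would be the classic Lorenz system:

```r
# sketch of lorenz.R: the Lorenz system as a deSolve derivative function
# a ~ Prandtl number (P), b ~ Rayleigh number (R), c ~ geometric factor (B)
lorenz = function(t, state, pars) {
  with(as.list(c(pars, state)), {
    dx = a * (y - x)
    dy = x * (b - z) - y
    dz = x * y - c * z
    return(list(c(dx, dy, dz)))
  })
}
```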
# lorenz
source("../R/lorenz.R")
pars = list(a=10,b=28,c=8/3)
res = ode(func=lorenz, c(x=0.1,y=0,z=0), times=seq(0,50,by=0.01), parms=pars)
ggplot(as.data.frame(res), aes(x,y, col=time))+geom_point()
resl = as.data.frame(res) %>% gather(key="var", value="value",-time)
ggplot(resl, aes(time, value, col=var))+geom_line()
# try with different parameters and initial conditions
pars = list(a=15,b=28,c=8/4)
res = ode(func=lorenz, c(x=0.3,y=5,z=10), times=seq(0,50,by=0.01), parms=pars)
ggplot(as.data.frame(res), aes(x,y, col=time))+geom_point()+scale_colour_gradientn(colours = terrain.colors(10))
ggplot(as.data.frame(res), aes(x,z, col=time))+geom_point()+scale_colour_gradientn(colours = terrain.colors(10))
ggplot(as.data.frame(res), aes(y,z, col=time))+geom_point()+scale_colour_gradientn(colours = terrain.colors(10))
resl = as.data.frame(res) %>% gather(key="var", value="value",-time)
ggplot(resl, aes(time, value, col=var))+geom_line()